Faster Lagrangian-Based Methods in Convex Optimization

Authors

Abstract

In this paper, we aim at unifying, simplifying, and improving the convergence rate analysis of Lagrangian-based methods for convex optimization problems. We first introduce the notion of a nice primal algorithmic map, which plays a central role in the unification and in the simplification of the analysis of most Lagrangian-based methods. Equipped with a nice primal algorithmic map, we then introduce a versatile generic scheme, which allows for the design and analysis of Faster LAGrangian (FLAG) methods with a new provably sublinear rate of convergence expressed in terms of function values and feasibility violation of the original (nonergodic) generated sequence. To demonstrate the power and versatility of our approach and results, we show that most well-known iconic schemes admit a nice primal algorithmic map and hence share the new faster rate of convergence results within their corresponding FLAG.
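The abstract leaves the generic scheme unspecified, so for orientation only, here is a minimal Python sketch of a classical Lagrangian-based iteration for min f(x) subject to Ax = b. It is not the paper's FLAG method; the function linearized_alm and all parameters are hypothetical names for this sketch. It tracks exactly the two quantities in which the paper's sublinear rates are expressed: function values and the feasibility violation ||Ax - b||.

    import numpy as np

    def linearized_alm(f_grad, A, b, x0, rho=1.0, step=0.1, iters=1000):
        # Illustrative linearized augmented Lagrangian method for
        # min f(x) subject to Ax = b. NOT the paper's FLAG scheme.
        x, y = x0.astype(float), np.zeros(A.shape[0])
        for _ in range(iters):
            # Primal step: one gradient step on the augmented Lagrangian
            # L_rho(x, y) = f(x) + <y, Ax - b> + (rho/2) * ||Ax - b||^2.
            x = x - step * (f_grad(x) + A.T @ (y + rho * (A @ x - b)))
            # Dual step: multiplier ascent along the constraint residual.
            y = y + rho * (A @ x - b)
        # Report the feasibility violation, one of the two quantities in
        # which the paper's convergence rates are expressed.
        return x, y, np.linalg.norm(A @ x - b)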


Similar Articles

Augmented Lagrangian Methods and Proximal Point Methods for Convex Optimization

We present a review of the classical proximal point method for finding zeroes of maximal monotone operators, and its application to augmented Lagrangian methods, including a rather complete convergence analysis. Next we discuss the generalized proximal point methods, either with Bregman distances or φ-divergences, which in turn give rise to a family of generalized augmented Lagrangians, as smooth...
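A compact way to state the connection this review develops, for the special case of linear equality constraints Ax = b and exact minimization (a standard identity due to Rockafellar, 1976; the notation below is ours, not the review's): the proximal point step on the dual function g coincides with the augmented Lagrangian update,

\[
y^{k+1} = \arg\max_{y} \Big( g(y) - \tfrac{1}{2c}\,\|y - y^k\|^2 \Big)
\quad\Longleftrightarrow\quad
x^{k+1} \in \arg\min_{x} L_c(x, y^k), \qquad
y^{k+1} = y^k + c\,(A x^{k+1} - b),
\]

where $L_c(x, y) = f(x) + \langle y, Ax - b \rangle + \tfrac{c}{2}\|Ax - b\|^2$. Replacing the squared Euclidean proximal term with a Bregman distance or a φ-divergence is what produces the generalized augmented Lagrangians mentioned above.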

Lagrangian Transformation and Interior Ellipsoid Methods in Convex Optimization

The rediscovery of the affine scaling method in the late 1980s was one of the turning points that led to a new chapter in modern optimization: interior point methods (IPMs). The purpose of this paper is to show the intrinsic connections between interior point methods and the exterior point methods (EPMs) that have been developed over the last 30 years. A class Ψ of smooth and strictly concave functions ψ:...

Convex Optimization and Lagrangian Duality

Finally, the Lagrange dual function is given by $g(\vec{\lambda}, \vec{\nu}) = \inf_{\vec{x}} L(\vec{x}, \vec{\lambda}, \vec{\nu})$. We now make a couple of simple observations. Observation: when $L(\cdot, \vec{\lambda}, \vec{\nu})$ is unbounded from below, the dual takes the value $-\infty$. Observation: $g(\vec{\lambda}, \vec{\nu})$ is concave, as it is the infimum of a set of affine functions. If $\vec{x}$ is a feasible solution of program (10.2)-(10.4), then we have the following: $L(\vec{x}, \vec{\lambda}, \vec{\nu}) = f_0(\vec{x})$...
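The snippet is cut off mid-derivation. Assuming program (10.2)-(10.4) is the standard form $\min f_0(\vec{x})$ subject to $f_i(\vec{x}) \le 0$ and $h_j(\vec{x}) = 0$, the usual continuation is weak duality: for any feasible $\vec{x}$ and any $\vec{\lambda} \ge 0$,

\[
g(\vec{\lambda}, \vec{\nu})
\;\le\;
L(\vec{x}, \vec{\lambda}, \vec{\nu})
\;=\;
f_0(\vec{x}) + \sum_i \lambda_i f_i(\vec{x}) + \sum_j \nu_j h_j(\vec{x})
\;\le\;
f_0(\vec{x}),
\]

since each $\lambda_i f_i(\vec{x}) \le 0$ and each $h_j(\vec{x}) = 0$. Taking the infimum over feasible $\vec{x}$ gives $g(\vec{\lambda}, \vec{\nu}) \le p^*$: every dual point lower-bounds the primal optimal value.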

Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence

In this paper, a new theory is developed for first-order stochastic convex optimization, showing that the global convergence rate is sufficiently quantified by a local growth rate of the objective function in a neighborhood of the optimal solutions. In particular, if the objective function $F(w)$ in the $\epsilon$-sublevel set grows as fast as $\|w - w_*\|_2$, where $w_*$ represents the closest optimal solution to...
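The exponent on the norm appears to have been lost in extraction, so the following is a hedged reconstruction of the usual form of a local growth (error bound) condition, not a quote from the paper:

\[
F(w) - F(w_*) \;\ge\; c\,\|w - w_*\|_2^{\,p}
\qquad \text{for all } w \text{ with } F(w) - F(w_*) \le \epsilon,
\]

for some constant $c > 0$ and growth exponent $p \ge 1$. Faster local growth (smaller $p$) yields faster global rates; for example, $p = 2$ (quadratic growth) recovers the $O(1/\epsilon)$ complexity familiar from strongly convex problems, versus the generic $O(1/\epsilon^2)$ of stochastic gradient methods.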

Variance Reduction for Faster Non-Convex Optimization

We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point. In contrast to the convex case, over the long history of this basic problem the only known theoretical results on first-order non-convex optimization remain full gradient descent, which converges in O(1/ε) iterations for smooth objectives, and stochastic gradient descent, which converges ...
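The abstract is truncated before the method is described. The canonical variance reduction device in this line of work is an SVRG-style gradient estimator, sketched below in Python as a hedged illustration; it is not necessarily the paper's exact algorithm, and grad_i, the step size, and the loop lengths are hypothetical choices for this sketch.

    import numpy as np

    def svrg(grad_i, n, w0, step=0.01, epochs=20, m=None):
        # SVRG-style loop for a finite-sum objective F(w) = (1/n) * sum_i f_i(w),
        # where grad_i(w, i) returns the gradient of the i-th component at w.
        w = w0.astype(float)
        m = m if m is not None else 2 * n  # inner-loop length per snapshot
        rng = np.random.default_rng(0)
        for _ in range(epochs):
            w_ref = w.copy()
            # Full gradient at the snapshot (reference) point.
            mu = sum(grad_i(w_ref, i) for i in range(n)) / n
            for _ in range(m):
                i = rng.integers(n)
                # Variance-reduced gradient: unbiased, and its variance
                # shrinks as the iterate w approaches the snapshot w_ref.
                g = grad_i(w, i) - grad_i(w_ref, i) + mu
                w = w - step * g
        return w

The correction term grad_i(w, i) - grad_i(w_ref, i) + mu keeps the estimator unbiased while driving its variance down near the snapshot, which is what allows rates faster than plain stochastic gradient descent.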

Journal

Journal title: SIAM Journal on Optimization

Year: 2022

ISSN: 1052-6234, 1095-7189

DOI: https://doi.org/10.1137/20m1375358